
Scalable Infomin Learning

Neural Information Processing Systems

The task of infomin learning aims to learn a representation with high utility while being uninformative about a specified target, with the latter achieved by minimising the mutual information between the representation and the target. It has broad applications, ranging from training fair prediction models against protected attributes, to unsupervised learning with disentangled representations. Recent works on infomin learning mainly use adversarial training, which involves training a neural network to estimate mutual information or its proxy, and is thus slow and difficult to optimise. Drawing on recent advances in slicing techniques, we propose a new infomin learning approach, which uses a novel proxy metric for mutual information. We further derive an accurate and analytically computable approximation to this proxy metric, thereby removing the need to construct neural network-based mutual information estimators. Experiments on algorithmic fairness, disentangled representation learning and domain adaptation verify that, compared to baselines, our method removes unwanted information more effectively within a limited time budget.
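The slicing idea the abstract alludes to can be illustrated with a toy proxy. The function below is a hypothetical sketch, not the paper's actual metric: it projects both the representation and the target onto random one-dimensional slices and takes the largest squared Pearson correlation across slices, which is analytically computable with no learned critic.

```python
import numpy as np

def sliced_correlation_proxy(Z, T, n_slices=50, seed=0):
    """Hypothetical slicing-style proxy (not the paper's exact metric):
    the largest squared Pearson correlation between random 1-D
    projections (slices) of the representation Z and the target T.
    It is zero when every pair of slices is uncorrelated."""
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(n_slices):
        u = rng.normal(size=Z.shape[1]); u /= np.linalg.norm(u)
        v = rng.normal(size=T.shape[1]); v /= np.linalg.norm(v)
        r = np.corrcoef(Z @ u, T @ v)[0, 1]
        best = max(best, float(r * r))
    return best
```

Minimising such a proxy during training would penalise the most-correlated slice, in the spirit of adversarial infomin but without training a neural estimator.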


Revisiting the Scaling Properties of Downstream Metrics in Large Language Model Training

Krajewski, Jakub, Shidani, Amitis, Busbridge, Dan, Wiseman, Sam, Ramapuram, Jason

arXiv.org Artificial Intelligence

Large Language Models (OpenAI et al., 2024; Team et al., 2025; DeepSeek-AI et al., 2025) based on the Transformer (Vaswani et al., 2023) architecture have achieved impressive results, approaching or exceeding human-level performance across multiple domains. Scaling laws (Hestness et al., 2017; Kaplan et al., 2020) are an established method for modeling the performance of these networks, enabling researchers to plan large-scale training runs based on curated sets of smaller experiments. Traditionally, these laws focus on predicting proxy metrics for model quality, such as pre-training log-perplexity. This has proven invaluable for optimizing training hyperparameters, like the optimal ratio of tokens to parameters. Another important direction in understanding the scaling of LLMs is tracking the behavior of more interpretable indicators of model capabilities, like accuracy on downstream benchmarks measuring performance on general knowledge, reasoning, math and coding tasks. Despite early attempts to solve this problem (Grattafiori et al., 2024; Isik et al., 2025; Chen et al., 2025), the scaling of downstream metrics has often been described as noisy and unreliable (Schaeffer et al., 2025; Lourie et al., 2025). Current approaches to modeling the downstream performance of LLMs (Grattafiori et al., 2024; Chen et al., 2025; Bhagia et al., 2024) typically rely on a two-stage approach, where the training budget is first mapped to a proxy metric such as the mean log-probability of the correct answer, and a second dependence is then established, mapping the proxy to benchmark accuracy. Work done as an intern at Apple.
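The two-stage approach described above can be sketched in a few lines. The functional forms below (a linear law from log-compute to mean log-probability, then a logit-linear map from log-probability to accuracy) are illustrative assumptions, not the fits used by any of the cited works.

```python
import numpy as np

def fit_two_stage(compute, logprob, accuracy):
    """Two-stage fit, sketched with illustrative functional forms:
    stage 1 maps log-compute to the mean log-probability of the
    correct answer (linear law); stage 2 maps that proxy metric to
    benchmark accuracy (linear fit in logit space)."""
    s1 = np.polyfit(np.log(compute), logprob, 1)   # stage 1: budget -> proxy
    logit = np.log(accuracy / (1.0 - accuracy))
    s2 = np.polyfit(logprob, logit, 1)             # stage 2: proxy -> accuracy
    return s1, s2

def predict_accuracy(compute, s1, s2):
    """Chain the two stages to predict accuracy at a new budget."""
    lp = np.polyval(s1, np.log(compute))
    return 1.0 / (1.0 + np.exp(-np.polyval(s2, lp)))
```

The point of the decomposition is that each stage is smoother than the direct compute-to-accuracy relationship, which is what makes downstream metrics predictable at all.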


Spectral Signature in Code Backdoor Detection, how far are we?

Le, Quoc Hung, Le-Cong, Thanh, Le, Bach, Xu, Bowen

arXiv.org Artificial Intelligence

As Large Language Models (LLMs) become increasingly integrated into software development workflows, they also become prime targets for adversarial attacks. Among these, backdoor attacks are a significant threat, allowing attackers to manipulate model outputs through hidden triggers embedded in training data. Detecting such backdoors remains a challenge, and one promising approach is the use of Spectral Signature defense methods that identify poisoned data by analyzing feature representations through eigenvectors. While some prior works have explored Spectral Signatures for backdoor detection in neural networks, recent studies suggest that these methods may not be optimally effective for code models. In this paper, we revisit the applicability of Spectral Signature-based defenses in the context of backdoor attacks on code models. We systematically evaluate their effectiveness under various attack scenarios and defense configurations, analyzing their strengths and limitations. We found that the widely used setting of Spectral Signature in code backdoor detection is often suboptimal. Hence, we explored the impact of different settings of its key factors. We discovered a new proxy metric that more accurately estimates the actual performance of Spectral Signature defenses without retraining the model after applying the defense.
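For reference, the classic Spectral Signatures defense the paper revisits scores each example by its squared projection onto the top singular vector of the centered representation matrix, then removes the top-scoring fraction. A minimal sketch follows; the 1.5x removal multiplier is the commonly used setting whose optimality the paper questions.

```python
import numpy as np

def spectral_signature_scores(reps):
    """Outlier scores a la Spectral Signatures: the squared projection
    of each centered representation onto the top right singular vector
    of the representation matrix."""
    M = reps - reps.mean(axis=0)
    _, _, vt = np.linalg.svd(M, full_matrices=False)
    return (M @ vt[0]) ** 2

def flag_poisoned(reps, eps=0.05, multiplier=1.5):
    """Flag the highest-scoring eps*multiplier fraction as poisoned
    (eps is the assumed poisoning rate; 1.5x is the widely used
    removal budget)."""
    scores = spectral_signature_scores(reps)
    k = int(len(reps) * eps * multiplier)
    return np.argsort(scores)[::-1][:k]
```

On representations where the poisoned subpopulation is well separated, the top singular direction aligns with the poison shift and the flagged set captures most poisoned examples.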


RoSE: Round-robin Synthetic Data Evaluation for Selecting LLM Generators without Human Test Sets

Cegin, Jan, Pecher, Branislav, Srba, Ivan, Simko, Jakub

arXiv.org Artificial Intelligence

LLMs are powerful generators of synthetic data, which are used for training smaller, specific models. This is especially valuable for low-resource languages, where human-labelled data is scarce but LLMs can still produce high-quality text. However, LLMs differ in how useful their outputs are for training. Selecting the best LLM as a generator is challenging because extrinsic evaluation requires costly human annotations (which are often unavailable for low-resource languages), while intrinsic metrics correlate poorly with downstream performance. We introduce Round robin Synthetic data Evaluation (RoSE), a proxy metric for selecting the best LLM generator without human test sets. RoSE trains a small model on the outputs of a candidate generator (LLM) and then evaluates it on generated synthetic examples from all other candidate LLMs. The final RoSE score is the mean performance of this small model. Across six LLMs, eleven languages, and three tasks (sentiment, topic, intent), RoSE identifies the optimal generator more often than any other intrinsic heuristic. RoSE outperforms intrinsic heuristics and comes within 0.76 percentage points of the optimal generator baseline. This result is measured in terms of downstream performance, obtained by training a small model on the chosen generator's outputs (optimal vs. proxy-metric-selected) and evaluating it on human-labelled test data. Additionally, RoSE is the only metric to achieve a positive correlation with performance on human test data.
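The round-robin procedure described above can be sketched in a few lines. The nearest-centroid learner and the toy datasets below are illustrative stand-ins for the small models and LLM-generated corpora used in the paper.

```python
import numpy as np

def rose_scores(datasets, train_fn, eval_fn):
    """RoSE as described: train a small model on each candidate
    generator's synthetic data, evaluate it on every other candidate's
    synthetic data, and score the generator by the mean of those
    cross-evaluations."""
    names = list(datasets)
    scores = {}
    for g in names:
        model = train_fn(*datasets[g])
        accs = [eval_fn(model, *datasets[h]) for h in names if h != g]
        scores[g] = float(np.mean(accs))
    return scores

def train_centroid(X, y):
    """Toy 'small model': one centroid per class."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(0) for c in classes])

def eval_centroid(model, X, y):
    """Accuracy of nearest-centroid prediction."""
    classes, cents = model
    pred = classes[np.argmin(((X[:, None] - cents) ** 2).sum(-1), axis=1)]
    return float((pred == y).mean())
```

A generator whose labels are systematically wrong trains a model that transfers poorly to the other candidates' data, and so receives a low RoSE score.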


Profit over Proxies: A Scalable Bayesian Decision Framework for Optimizing Multi-Variant Online Experiments

Pillai, Srijesh, Chandrawat, Rajesh Kumar

arXiv.org Machine Learning

Online controlled experiments (A/B tests) are fundamental to data-driven decision-making in the digital economy. However, their real-world application is frequently compromised by two critical shortcomings: the use of statistically flawed heuristics like "p-value peeking", which inflates false positive rates, and an over-reliance on proxy metrics like conversion rates, which can lead to decisions that inadvertently harm core business profitability. This paper addresses these challenges by introducing a comprehensive and scalable Bayesian decision framework designed for profit optimization in multi-variant (A/B/n) experiments. We propose a hierarchical Bayesian model that simultaneously estimates the probability of conversion (using a Beta-Bernoulli model) and the monetary value of that conversion (using a robust Bayesian model for the mean transaction value). Building on this, we employ a decision-theoretic stopping rule based on Expected Loss, enabling experiments to be concluded not only when a superior variant is identified but also when it becomes clear that no variant offers a practically significant improvement (stopping for futility). The framework successfully navigates "revenue traps" where a variant with a higher conversion rate would have resulted in a net financial loss, correctly terminates futile experiments early to conserve resources, and maintains strict statistical integrity throughout the monitoring process. Ultimately, this work provides a practical and principled methodology for organizations to move beyond simple A/B testing towards a mature, profit-driven experimentation culture, ensuring that statistical conclusions translate directly to strategic business value.
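The Expected Loss stopping rule described above can be sketched with a Monte Carlo computation under independent Beta-Bernoulli posteriors. This sketch covers conversion only; the paper's framework additionally models the monetary value of each conversion, and the threshold name `toc` ("threshold of caring") is an illustrative choice.

```python
import numpy as np

def expected_loss(successes, trials, n_draws=100_000, seed=0):
    """Monte Carlo expected loss per variant under independent
    Beta(1 + s, 1 + n - s) posteriors: the average conversion-rate
    shortfall from shipping that variant instead of the best one."""
    rng = np.random.default_rng(seed)
    draws = np.stack([
        rng.beta(1 + s, 1 + n - s, n_draws)
        for s, n in zip(successes, trials)
    ])                                      # shape: (variants, draws)
    return (draws.max(axis=0) - draws).mean(axis=1)

def decide(successes, trials, toc=0.001):
    """Stop and ship the variant whose expected loss is below `toc`;
    otherwise return None and keep collecting data."""
    loss = expected_loss(successes, trials)
    best = int(np.argmin(loss))
    return (best, loss) if loss[best] < toc else (None, loss)
```

Unlike p-value peeking, this rule can be checked continuously: the expected loss of a genuinely better variant shrinks toward zero as evidence accumulates, while close variants keep every loss above the threshold until a futility rule ends the test.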


The Strong, Weak and Benign Goodhart's law. An independence-free and paradigm-agnostic formalisation

Majka, Adrien, El-Mhamdi, El-Mahdi

arXiv.org Machine Learning

Goodhart's law is a famous adage in policy-making that states that ``When a measure becomes a target, it ceases to be a good measure''. As machine learning models and the optimisation capacity to train them have grown, mounting empirical evidence has reinforced the belief in the validity of this law, without however it being formalised. Recently, a few attempts were made to formalise Goodhart's law, either by categorising variants of it, or by studying how optimising a proxy metric affects the optimisation of an intended goal. In this work, we lift the simplifying independence assumption made in previous works, and the assumption on the learning paradigm made in most of them, to study the effect of the coupling between the proxy metric and the intended goal on Goodhart's law. Our results show that in the case of a light-tailed goal and light-tailed discrepancy, dependence does not change the nature of Goodhart's effect. However, in the light-tailed goal and heavy-tailed discrepancy case, we exhibit an example where over-optimisation occurs at a rate inversely proportional to the heavy-tailedness of the discrepancy between the goal and the metric.


The DCR Delusion: Measuring the Privacy Risk of Synthetic Data

Yao, Zexi, Krčo, Nataša, Ganev, Georgi, de Montjoye, Yves-Alexandre

arXiv.org Artificial Intelligence

Synthetic data has become an increasingly popular way to share data without revealing sensitive information. Though Membership Inference Attacks (MIAs) are widely considered the gold standard for empirically assessing the privacy of a synthetic dataset, practitioners and researchers often rely on simpler proxy metrics such as Distance to Closest Record (DCR). These metrics estimate privacy by measuring the similarity between the training data and the generated synthetic data. This similarity is also compared against that between the training data and a disjoint holdout set of real records to construct a binary privacy test. If the synthetic data is not more similar to the training data than the holdout set is, it passes the test and is considered private. In this work we show that, while computationally inexpensive, DCR and other distance-based metrics fail to identify privacy leakage. Across multiple datasets and both classical models such as Baynet and CTGAN and more recent diffusion models, we show that datasets deemed private by proxy metrics are highly vulnerable to MIAs. We similarly find both the binary privacy test and the continuous measure based on these metrics to be uninformative of actual membership inference risk. We further show that these failures are consistent across different metric hyperparameter settings and record selection methods. Finally, we argue that DCR and other distance-based metrics are flawed by design and show an example of a simple form of leakage that they miss in practice. With this work, we hope to motivate practitioners to move away from proxy metrics to MIAs as the rigorous, comprehensive standard for evaluating the privacy of synthetic data, in particular when making claims that datasets are legally anonymous.
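The DCR-based binary privacy test the paper critiques can be stated in a few lines. This is a minimal sketch using Euclidean distance and the median as the comparison statistic, one of several settings in use.

```python
import numpy as np

def dcr(a, b):
    """Distance to Closest Record: for each row of `a`, the Euclidean
    distance to its nearest row in `b`."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1)

def dcr_privacy_test(synthetic, train, holdout):
    """The binary test described above: the synthetic data 'passes'
    (is deemed private) if it is no closer to the training set, on
    median DCR, than the holdout set is."""
    return np.median(dcr(synthetic, train)) >= np.median(dcr(holdout, train))
```

The test catches near-copies of training records, but, as the paper argues, passing it says little about vulnerability to membership inference.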


A Computational Theory for Efficient Model Evaluation with Causal Guarantees

Yan, Hedong

arXiv.org Machine Learning

To reduce the cost of experimental evaluation of models, we introduce a computational theory of evaluation for prediction and decision models: we build evaluation models to accelerate evaluation procedures. We prove upper bounds on the generalized error and the generalized causal-effect error of given evaluation models. We also prove the efficiency and consistency of the causal effect, estimated by prediction, from the deployed subject to the evaluation metric. To learn evaluation models, we propose a meta-learner that handles the problem of heterogeneous evaluation-subject spaces. Compared with existing evaluation approaches, our (conditional) evaluation models reduce evaluation errors by 24.1%-99.0% across 12 scenarios, including individualized medicine, scientific simulation, social experiments, business activities, and quantitative trading. Evaluation time is reduced by 3-7 orders of magnitude compared with experiments or simulations.


On Teacher Hacking in Language Model Distillation

Tiapkin, Daniil, Calandriello, Daniele, Ferret, Johan, Perrin, Sarah, Vieillard, Nino, Ramé, Alexandre, Blondel, Mathieu

arXiv.org Machine Learning

Post-training of language models (LMs) increasingly relies on the following two stages: (i) knowledge distillation, where the LM is trained to imitate a larger teacher LM, and (ii) reinforcement learning from human feedback (RLHF), where the LM is aligned by optimizing a reward model. In the second RLHF stage, a well-known challenge is reward hacking, where the LM over-optimizes the reward model. This phenomenon is in line with Goodhart's law and can lead to degraded performance on the true objective. In this paper, we investigate whether a similar phenomenon, which we call teacher hacking, can occur during knowledge distillation. This could arise because the teacher LM is itself an imperfect approximation of the true distribution. To study this, we propose a controlled experimental setup involving: (i) an oracle LM representing the ground-truth distribution, (ii) a teacher LM distilled from the oracle, and (iii) a student LM distilled from the teacher. Our experiments reveal the following insights. When using a fixed offline dataset for distillation, teacher hacking occurs; moreover, we can detect it by observing when the optimization process deviates from polynomial convergence laws. In contrast, employing online data generation techniques effectively mitigates teacher hacking. More precisely, we identify data diversity as the key factor in preventing hacking. Overall, our findings provide a deeper understanding of the benefits and limitations of distillation for building robust and efficient LMs.
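The detection idea mentioned above, spotting when optimization deviates from a polynomial convergence law, can be sketched as follows. The fit fraction and tolerance are illustrative choices, not the paper's settings.

```python
import numpy as np

def powerlaw_deviation(steps, losses, fit_frac=0.5, tol=0.2):
    """Sketch of the detection idea: fit a polynomial convergence law
    loss ~ a * step**(-b) (linear in log-log space) on the first
    `fit_frac` of training, then flag later steps whose log-loss
    deviates from the extrapolated law by more than `tol`."""
    logt, logl = np.log(steps), np.log(losses)
    k = int(len(steps) * fit_frac)
    slope, intercept = np.polyfit(logt[:k], logl[:k], 1)
    resid = logl - (slope * logt + intercept)
    return np.abs(resid) > tol
```

A loss curve that tracks the power law early but plateaus later (as when the student keeps fitting the teacher while progress on the true distribution stalls) is flagged from the point where the plateau clears the tolerance.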


Asymmetrical Latent Representation for Individual Treatment Effect Modeling

Lacombe, Armand, Sebag, Michèle

arXiv.org Artificial Intelligence

Conditional Average Treatment Effect (CATE) estimation, at the heart of counterfactual reasoning, is a crucial challenge for causal modeling, both theoretically and in applications to domains such as healthcare, sociology, or advertising. Borrowing domain adaptation principles, a popular design maps the sample representation to a latent space that balances the control and treated populations while enabling the prediction of the potential outcomes. This paper presents a new CATE estimation approach based on the asymmetrical search for two latent spaces, called Asymmetrical Latent Representation for Individual Treatment Effect (ALRITE), where the two latent spaces are respectively intended to optimize the counterfactual prediction accuracy on the control and the treated samples. Under moderate assumptions, ALRITE admits an upper bound on the precision of the estimation of heterogeneous effects (PEHE), and the approach is successfully validated empirically against the state of the art.